deepfake video
I'm watching myself on YouTube saying things I would never say. This is the deepfake menace we must confront
Yanis Varoufakis
These inventions trigger rage, but also optimism. It was my blue shirt, a present from my sister-in-law, that gave it all away. It made me think of Yakov Petrovich Golyadkin, the lowly bureaucrat in Fyodor Dostoevsky's novella The Double, a disconcerting study of the fragmented self within a vast, impersonal feudal system. It all started with a message from an esteemed colleague congratulating me on a video talk on some geopolitical theme.
- North America > United States (0.15)
- South America > Venezuela (0.05)
- Oceania > Australia (0.05)
- Europe > Ukraine (0.05)
- Leisure & Entertainment > Sports (0.72)
- Information Technology > Security & Privacy (0.46)
AI deepfakes of real doctors spreading health misinformation on social media
An investigation found that real video of medical professionals is being manipulated using AI. TikTok and other social media platforms are hosting AI-generated deepfake videos of doctors whose words have been manipulated to help sell supplements and spread health misinformation. The factchecking organisation Full Fact has uncovered hundreds of such videos featuring impersonated versions of doctors and influencers directing viewers to Wellness Nest, a US-based supplements firm. All the deepfakes involve real footage of a health expert taken from the internet.
- North America > United States (0.15)
- Europe > United Kingdom > Wales (0.05)
- Europe > United Kingdom > Scotland (0.05)
- (4 more...)
- Media > News (1.00)
- Health & Medicine (1.00)
- Government > Regional Government (1.00)
- Information Technology > Security & Privacy (0.89)
'It was extremely pornographic': Cara Hunter on the deepfake video that nearly ended her political career
The Irish politician was targeted in 2022, in the final weeks of her run for office. When Cara Hunter looks back on the moment she found out she had been deepfaked, she says it is "like watching a horror movie". The setting is her grandmother's rural home in the west of Tyrone on her 90th birthday, April 2022. "Everyone was there," she says. "I was sitting with all my closest family members and family friends when I got a notification through Facebook Messenger." It was from a stranger.
- Europe > United Kingdom > Northern Ireland (0.18)
- Europe > United Kingdom > Wales (0.05)
- Oceania > Australia (0.04)
- (5 more...)
- Media (1.00)
- Health & Medicine (1.00)
- Leisure & Entertainment > Sports (0.69)
- (2 more...)
A deepfake video of Nvidia's CEO sent thousands of viewers to a crypto scam
Reportedly, the fake video had almost 100,000 live YouTube viewers at one point, far more than the real keynote stream hosted by Nvidia. Nvidia's GPU Technology Conference isn't making many waves among the gamer or PC hardware crowds this year, perhaps because it seems to be exclusively interested in boosting hardware for "AI" and data centers. So it's almost ironic that a phony version of the keynote livestream reportedly relied on generative "AI" to fake CEO Jensen Huang and send viewers to a cryptocurrency scam. A YouTube channel calling itself "NVIDIA LIVE" started a livestream shortly after the real Nvidia event began, which users on Twitter reported was a deepfake video of the CEO promoting a "crypto mass adoption event."
- North America > United States > Pennsylvania (0.05)
- North America > United States > District of Columbia > Washington (0.05)
- North America > United States > California (0.05)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Hardware (1.00)
- Banking & Finance > Trading (1.00)
- Leisure & Entertainment > Games > Computer Games (0.42)
Peek-a-boo, Big Tech sees you: Expert warns just 20 cloud images can make an AI deepfake video of your child
Parents love capturing their kids' big moments, from first steps to birthday candles. But a new study out of the U.K. shows many of those treasured images may be scanned, analyzed and turned into data by cloud storage services, and nearly half of parents don't even realize it. A survey of 2,019 U.K. parents, conducted by Perspectus Global and commissioned by Swiss privacy tech company Proton, found that 48% of parents were unaware that providers like Google Photos, Apple iCloud, Amazon Photos and Dropbox can access and analyze the photos they upload. These companies use artificial intelligence to sort images into albums, recognize faces and locations, and suggest memories.
- North America > United States > District of Columbia > Washington (0.37)
- North America > United States > Texas (0.26)
Scientists warn deepfakes are about to become undetectable
AI-generated deepfake videos depicting humans are getting more advanced, and more common, by the day. The most sophisticated tools can now produce manipulated content that is indistinguishable to the average human observer. Deepfake detectors, which use their own AI models to analyze video clips, try to counter this deception by searching for hidden tells. One of those is the presence of a human pulse: the subtle, periodic color changes in facial skin caused by blood flow. In the past, detectors that found a plausible pulse could confidently classify a clip as genuine.
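The pulse tell the article describes is the idea behind remote photoplethysmography: average the green channel over a face region frame by frame, then look for a dominant frequency in the human heart-rate band. The sketch below illustrates that idea only; the function name, band limits, and synthetic input are illustrative assumptions, not any detector's actual implementation.

```python
import numpy as np

def estimate_pulse_hz(green_means, fps=30.0, band=(0.7, 4.0)):
    """Estimate a 'heartbeat' frequency from a per-frame mean
    green-channel signal taken over a face region.

    Returns the peak frequency (Hz) inside a plausible human
    heart-rate band, plus the share of in-band power it carries.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal)) ** 2  # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    band_power = spectrum[in_band]
    peak_hz = freqs[in_band][band_power.argmax()]
    concentration = band_power.max() / band_power.sum()
    return peak_hz, concentration

# Synthetic check: a 72 bpm (1.2 Hz) pulse buried in noise.
fps, seconds = 30.0, 10
t = np.arange(int(fps * seconds)) / fps
rng = np.random.default_rng(0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
hz, conc = estimate_pulse_hz(trace, fps=fps)
```

A genuine clip should show a sharp in-band peak; the article's point is that newer generators reproduce this signal, so its presence is no longer a reliable tell.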
ExDDV: A New Dataset for Explainable Deepfake Detection in Video
Hondru, Vlad, Hogea, Eduard, Onchis, Darian, Ionescu, Radu Tudor
The ever-growing realism and quality of generated videos makes it increasingly hard for humans to spot deepfake content, forcing them to rely more and more on automatic deepfake detectors. However, deepfake detectors are also prone to errors, and their decisions are not explainable, leaving humans vulnerable to deepfake-based fraud and misinformation. To this end, we introduce ExDDV, the first dataset and benchmark for Explainable Deepfake Detection in Video. ExDDV comprises around 5.4K real and deepfake videos that are manually annotated with text descriptions (to explain the artifacts) and clicks (to point out the artifacts). We evaluate a number of vision-language models on ExDDV, performing experiments with various fine-tuning and in-context learning strategies. Our results show that text and click supervision are both required to develop robust explainable models for deepfake videos, which are able to localize and describe the observed artifacts. Our novel dataset and code to reproduce the results are available at https://github.com/vladhondru25/ExDDV.
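Since ExDDV pairs each artifact with an annotated click, one natural localization metric is the distance between a model's predicted click and the human one, normalized by frame size. This is only a sketch of such a metric; the dictionary keys (`pred`, `truth`, `width`, `height`) are hypothetical and do not reflect the dataset's actual schema.

```python
import math

def click_error(pred, truth, width, height):
    """Euclidean distance between a predicted and an annotated artifact
    click, normalized by the frame diagonal so scores lie in [0, 1]."""
    diag = math.hypot(width, height)
    return math.hypot(pred[0] - truth[0], pred[1] - truth[1]) / diag

def mean_click_error(examples):
    """Average normalized click error over a list of dicts with the
    hypothetical keys 'pred', 'truth' ((x, y) tuples), 'width', 'height'."""
    errors = [click_error(e["pred"], e["truth"], e["width"], e["height"])
              for e in examples]
    return sum(errors) / len(errors)
```

For instance, a prediction at one corner when the annotation is at the opposite corner scores 1.0, and an exact hit scores 0.0.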
- North America > Montserrat (0.04)
- Europe > Romania > Vest Development Region > Timiș County > Timișoara (0.04)
- Europe > Romania > București - Ilfov Development Region > Municipality of Bucharest > Bucharest (0.04)
Hindi audio-video-Deepfake (HAV-DF): A Hindi language-based Audio-video Deepfake Dataset
Kaur, Sukhandeep, Buhari, Mubashir, Khandelwal, Naman, Tyagi, Priyansh, Sharma, Kiran
Deepfakes offer great potential for innovation and creativity, but they also pose significant risks to privacy, trust, and security. With a vast Hindi-speaking population, India is particularly vulnerable to deepfake-driven misinformation campaigns. Fake videos or speeches in Hindi can have an enormous impact on rural and semi-urban communities, where digital literacy tends to be lower and people are more inclined to trust video content. The development of effective frameworks and detection tools to combat deepfake misuse requires high-quality, diverse, and extensive datasets. The existing popular datasets, such as FF-DF (FaceForensics++) and DFDC (DeepFake Detection Challenge), are based on the English language. Hence, this paper aims to create the first Hindi deepfake dataset, named ``Hindi audio-video-Deepfake'' (HAV-DF). The dataset has been generated using faceswap, lip-syncing and voice cloning methods. This multi-step process allows us to create a rich, varied dataset that captures the nuances of Hindi speech and facial expressions, providing a robust foundation for training and evaluating deepfake detection models in a Hindi language context. It is unique of its kind, as all of the previous datasets contain either deepfake videos or synthesized audio, whereas HAV-DF can be used to train detectors for both deepfake video and deepfake audio. Notably, the newly introduced HAV-DF dataset demonstrates lower detection accuracies across existing detection methods, such as Headpose and Xception-c40, compared with other well-known datasets such as FF-DF and DFDC. This trend suggests that HAV-DF is harder to detect, possibly due to its focus on Hindi-language content and diverse manipulation techniques. The HAV-DF dataset fills the gap in Hindi-specific deepfake datasets, aiding multilingual deepfake detection development.
- North America > United States (0.46)
- Asia > India > Haryana (0.04)
- Asia > Middle East > UAE (0.04)
Russia behind Walz deepfake video, US intelligence community officials say
A deepfake video disparaging vice presidential candidate Tim Walz was created by "Russian influence actors" who are trying to undermine Kamala Harris' campaign, U.S. intelligence community officials told Fox News. The video circulating on social media purports to show former Mankato West High School student Matthew Metro claiming that he was groped and kissed by Walz in 1997, when the Minnesota governor was a teacher there. Except the allegations are completely fabricated. "Based on newly available intelligence analysis conducted over the weekend, Russian influence actors manufactured and amplified the content," the officials told Fox News, adding that the video fit a pattern used by Russian actors in which the subject was "staged direct to camera and trying to make them go viral." These intelligence community officials also pointed out that they believe Russia is likely to be more aggressive in its efforts to sow division in the U.S. post-election if Harris wins, because Russia prefers that former President Trump win the 2024 race.
- Europe > Russia (0.84)
- Asia > Russia (0.84)
- Asia > Middle East > Israel (0.26)
- (3 more...)
A Multimodal Framework for Deepfake Detection
Gandhi, Kashish, Kulkarni, Prutha, Shah, Taran, Chaudhari, Piyush, Narvekar, Meera, Ghag, Kranti
The rapid advancement of deepfake technology poses a significant threat to digital media integrity. Deepfakes, synthetic media created using AI, can convincingly alter videos and audio to misrepresent reality. This creates risks of misinformation, fraud, and severe implications for personal privacy and security. Our research addresses the critical issue of deepfakes through an innovative multimodal approach, targeting both visual and auditory elements. This comprehensive strategy recognizes that human perception integrates multiple sensory inputs, particularly visual and auditory information, to form a complete understanding of media content. For visual analysis, a model that employs advanced feature extraction techniques was developed, extracting nine distinct facial characteristics and then applying various machine learning and deep learning models. For auditory analysis, our model leverages mel-spectrogram analysis for feature extraction and then applies various machine learning and deep learning models. To achieve a combined analysis, real and deepfake audio tracks in the original dataset were swapped for testing purposes, ensuring balanced samples. Using our proposed models for video and audio classification, i.e. an Artificial Neural Network and VGG19, the overall sample is classified as a deepfake if either component is identified as such. Our multimodal framework combines visual and auditory analyses, yielding an accuracy of 94%.
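The abstract's fusion rule is a simple late fusion: flag the sample as a deepfake if either the visual or the auditory branch says so. A minimal sketch of that OR-style decision, with illustrative class and function names (the actual branch models in the paper are an Artificial Neural Network and VGG19):

```python
from dataclasses import dataclass

@dataclass
class ModalityResult:
    """Output of one branch (visual or auditory) of the detector."""
    label: str        # "real" or "deepfake"
    confidence: float

def fuse_or(video: ModalityResult, audio: ModalityResult) -> str:
    """Late-fusion rule from the abstract: the overall sample is a
    deepfake if EITHER modality is classified as such."""
    if video.label == "deepfake" or audio.label == "deepfake":
        return "deepfake"
    return "real"
```

This rule trades precision for recall: a false alarm in either branch flags the whole sample, but a deepfake only escapes if it fools both branches at once.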
- Oceania > New Zealand > North Island > Auckland Region > Auckland (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > Montserrat (0.04)
- (3 more...)